1 |
Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351)

In: Dagstuhl Reports, Aug 2021, pp. 89--138, ISSN 2192-5283. DOI: ⟨10.4230/DagRep.11.7.89⟩. https://hal.archives-ouvertes.fr/hal-03507948 ; https://gitlab.com/unlid/dagstuhl-seminar/-/wikis/home (2021)
7 |
Syntactic Nuclei in Dependency Parsing -- A Multilingual Exploration
10 |
Attention Can Reflect Syntactic Structure (If You Let It)
11 |
What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
12 |
Schrödinger's Tree -- On Syntax and Neural Language Models
16 |
Køpsala: Transition-Based Graph Parsing via Efficient Training and Effective Encoding
17 |
Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English
19 |
Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection
20 |
Do Neural Language Models Show Preferences for Syntactic Formalisms?

Abstract:
Recent work on the interpretability of deep neural language models has concluded that many properties of natural language syntax are encoded in their representational spaces. However, such studies often suffer from limited scope by focusing on a single language and a single linguistic formalism. In this study, we aim to investigate the extent to which the semblance of syntactic structure captured by language models adheres to a surface-syntactic or deep-syntactic style of analysis, and whether the patterns are consistent across different languages. We apply a probe for extracting directed dependency trees to BERT and ELMo models trained on 13 different languages, probing for two different syntactic annotation styles: Universal Dependencies (UD), prioritizing deep syntactic relations, and Surface-Syntactic Universal Dependencies (SUD), focusing on surface structure. We find that both models exhibit a preference for UD over SUD - with interesting variations across languages and layers - and that the strength ... (ACL 2020)

Keywords:
Computation and Language (cs.CL); Machine Learning (cs.LG); FOS: Computer and information sciences

URL: https://dx.doi.org/10.48550/arxiv.2004.14096 ; https://arxiv.org/abs/2004.14096